Conversation

@rka97 (Contributor) commented Apr 3, 2025

This is for the LM workload.

@rka97 (Contributor Author) commented Oct 21, 2025

Adding some TODOs:

  • Add model-matching tests (i.e., for the same inputs, the PyTorch and JAX models should produce the same outputs, similar to model_match); a sketch of such a test follows this list
  • Fix initialization so that it is identical for PyTorch and JAX (there appear to be some minor differences)
  • Add an integration test for the lm workload to GitHub Actions
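
A minimal sketch of what such a model-matching test could look like, assuming a Flax model with an apply method and a PyTorch module whose weights have already been copied from the JAX parameters; the function name, arguments, and tolerances below are hypothetical, not the PR's code:

# Hypothetical model-matching check: feed the same token batch through both
# models and compare the logits. Shapes, vocab size, and tolerances are assumptions.
import numpy as np
import torch
import jax.numpy as jnp


def check_forward_match(jax_model, jax_params, torch_model,
                        vocab_size=50257, seq_len=128, batch_size=2):
  rng = np.random.default_rng(0)
  tokens = rng.integers(0, vocab_size, size=(batch_size, seq_len), dtype=np.int64)

  # JAX/Flax forward pass.
  jax_logits = np.asarray(jax_model.apply(jax_params, jnp.asarray(tokens)))

  # PyTorch forward pass (weights assumed already copied from jax_params).
  torch_model.eval()
  with torch.no_grad():
    torch_logits = torch_model(torch.from_numpy(tokens)).cpu().numpy()

  np.testing.assert_allclose(jax_logits, torch_logits, rtol=1e-4, atol=1e-4)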

@priyakasimbeg (Contributor) left a comment:

Second round of small requested changes.

Perhaps something we should discuss: we need a more descriptive name for the workload, e.g. fineweb_edu_lm. What do you all think? @Niccolo-Ajroldi @rka97

@@ -0,0 +1,397 @@
"""
Originally based on code from the NanoDO repository under the Apache 2.0 license:
Contributor:

Can we rename this file to models.py to be consistent with the pattern in the other workload definitions.

Member:

Agree.

Contributor Author (@rka97):

Done.

@@ -0,0 +1,344 @@
"""
Originally based on the plainLM codebase:
Contributor:

Can we rename this file to models.py to be consistent with the other workload definitions.

Member:

Agree.

Contributor Author (@rka97):

Done.

    'workload_path': 'librispeech_deepspeech/librispeech',
    'workload_class_name': 'LibriSpeechDeepSpeechNormAndSpecAugWorkload',
  },
  'lm': {'workload_path': 'lm/lm', 'workload_class_name': 'LmWorkload'},
Contributor:

Now that we have all the important implementation details figured out, should we pick a more descriptive name for the workload? I am thinking perhaps 'fineweb_edu_lm'?

Member:

fineweb_edu_lm or finewebedu_lm make sense, matching the other workload names.

Contributor Author (@rka97):

Changed to finewebedu_lm.
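
For reference, the renamed registry entry would presumably look something like this; the dict name, workload_path, and class name below are assumptions carried over from the 'lm' entry and may also change with the rename:

# Sketch of the renamed workload registry entry (all names are assumptions).
WORKLOADS = {
  'finewebedu_lm': {
    'workload_path': 'lm/lm',
    'workload_class_name': 'LmWorkload',
  },
}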

elif workload_name == 'mnist':
  return 16
elif workload_name == 'lm':
  return 4
Contributor:

This should work for bsz 64, right?

Contributor Author (@rka97):

Yes, I just made it smaller because I was debugging with V100s. Changed it back to 64.

elif workload_name == 'cifar':
  return 128
elif workload_name == 'lm':
  return 8
Contributor:

Should work for bsz 64, right?

Contributor Author (@rka97):

Yes, I just made it smaller because I was debugging with V100s. Changed it back to 64.
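
Both per-device batch-size branches above were changed back to 64 accordingly; a sketch of the resulting branch, where the enclosing function name and the non-lm branches are assumptions based on the diff context:

# Sketch only: the function name is assumed; the lm branch reflects the value
# of 64 agreed on in the review, replacing the debugging values of 4 and 8.
def get_batch_size(workload_name):
  if workload_name == 'mnist':
    return 16
  elif workload_name == 'cifar':
    return 128
  elif workload_name == 'lm':
    return 64
  raise ValueError(f'Unsupported workload: {workload_name}')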
